tactile sensor
Graphene-based sensor to improve robot touch
Multiscale-structured miniaturized 3D force sensors (CC BY 4.0)

Robots are becoming increasingly capable in vision and movement, yet touch remains one of their major weaknesses. Now, researchers have developed a miniature tactile sensor that could give robots something much closer to a human sense of touch. The technology, developed by researchers at the University of Cambridge, is based on liquid metal composites and graphene, a two-dimensional form of carbon. The 'skin' allows robots to detect not just how hard they are pressing on an object, but also the direction of applied forces, whether an object is slipping, and even how rough a surface is, at a scale small enough to rival the spatial resolution of human fingertips. Their results are reported in the journal .
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.38)
- Asia > China (0.05)
- North America > United States > Michigan (0.05)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- North America > United States > Virginia (0.04)
- Information Technology (0.68)
- Law (0.67)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
Touch and Go: Learning from Human-Collected Vision and Touch (Supplementary Material)
We provide a webpage for our dataset; the dataset is available through the webpage (and directly via this link). We use a learning rate of 0.01 for ResNet-18 and 0.1 for ResNet-50. This loss is motivated by recent contrastive learning: it maximizes the probability that the neural network selects the corresponding patch in both the original image x_I and the generated image x̂_I. For reference, we also show the image that corresponds to the tactile example on the far right (not used by the model).
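The supplementary text references a contrastive loss without giving its form. A minimal InfoNCE-style sketch in Python (the function name, temperature default, and toy similarity scores are illustrative assumptions, not the paper's implementation):

```python
import math

def info_nce_loss(sims, correct_idx, temperature=0.07):
    """InfoNCE-style loss: negative log softmax probability of the
    correct patch, given similarity scores against all candidate patches."""
    scaled = [s / temperature for s in sims]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    return -math.log(exps[correct_idx] / sum(exps))
```

With uniform similarities over N patches the loss reduces to log N, the expected chance-level value.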
Simultaneous Tactile-Visual Perception for Learning Multimodal Robot Manipulation
Li, Yuyang, Chen, Yinghan, Zhao, Zihang, Li, Puhao, Liu, Tengyu, Huang, Siyuan, Zhu, Yixin
Robotic manipulation requires both rich multimodal perception and effective learning frameworks to handle complex real-world tasks. See-through-skin (STS) sensors, which combine tactile and visual perception, offer promising sensing capabilities, while modern imitation learning provides powerful tools for policy acquisition. However, existing STS designs lack simultaneous multimodal perception and suffer from unreliable tactile tracking. Furthermore, integrating these rich multimodal signals into learning-based manipulation pipelines remains an open challenge. We introduce TacThru, an STS sensor enabling simultaneous visual perception and robust tactile signal extraction, and TacThru-UMI, an imitation learning framework that leverages these multimodal signals for manipulation. Our sensor features a fully transparent elastomer, persistent illumination, novel keyline markers, and efficient tracking, while our learning system integrates these signals through a Transformer-based Diffusion Policy. Experiments on five challenging real-world tasks show that TacThru-UMI achieves an average success rate of 85.5%, significantly outperforming the baselines of alternating tactile-visual (66.3%) and vision-only (55.4%). The system excels in critical scenarios, including contact detection with thin and soft objects and precision manipulation requiring multimodal coordination. This work demonstrates that combining simultaneous multimodal perception with modern learning frameworks enables more precise, adaptable robotic manipulation.
- Asia > China > Hubei Province > Wuhan (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
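The abstract's central contrast is simultaneous versus alternating tactile-visual perception. A toy sketch of the two observation streams (data layout and function names are hypothetical, not taken from the paper):

```python
def simultaneous_stream(frames):
    """Simultaneous perception: every timestep carries both modalities."""
    return [(f["visual"], f["tactile"]) for f in frames]

def alternating_stream(frames):
    """Baseline scheme: one modality per timestep, so each modality is
    effectively observed at half the rate and never together."""
    return [(f["visual"], None) if i % 2 == 0 else (None, f["tactile"])
            for i, f in enumerate(frames)]
```

The simultaneous stream gives the policy a complete (visual, tactile) pair at every step, which is what the TacThru design claims over the alternating baseline.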
OSMO: Open-Source Tactile Glove for Human-to-Robot Skill Transfer
Yin, Jessica, Qi, Haozhi, Wi, Youngsun, Kundu, Sayantan, Lambeta, Mike, Yang, William, Wang, Changhao, Wu, Tingfan, Malik, Jitendra, Hellebrekers, Tess
Abstract-- Human video demonstrations provide abundant training data for learning robot policies, but video alone cannot capture the rich contact signals critical for mastering manipulation. We introduce OSMO, an open-source wearable tactile glove designed for human-to-robot skill transfer. The glove features 12 three-axis tactile sensors across the fingertips and palm and is designed to be compatible with state-of-the-art hand-tracking methods for in-the-wild data collection. We demonstrate that a robot policy trained exclusively on human demonstrations collected with OSMO, without any real robot data, is capable of executing a challenging contact-rich manipulation task. On a real-world wiping task requiring sustained contact pressure, our tactile-aware policy achieves a 72% success rate, outperforming vision-only baselines by eliminating contact-related failure modes. We release complete hardware designs, firmware, and assembly instructions to support community adoption. Tactile sensing enables humans to excel at manipulation by providing real-time feedback about contact forces that vision alone cannot capture. Consider trying to dice a carrot from video alone; one cannot observe the nuanced force control that makes the task successful. Many different applied forces can result in nearly identical visual appearances, leaving critical information about force control invisible to vision.
- North America > United States > Pennsylvania (0.04)
- North America > United States > Michigan (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
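With 12 three-axis taxels, each timestep yields a 36-dimensional tactile observation. A small sketch of how such readings might be flattened into a fixed-size policy input (the data layout is an assumption; the released OSMO firmware may structure it differently):

```python
def tactile_observation(readings):
    """Flatten 12 three-axis (x, y, z) taxel readings into one
    36-dimensional vector suitable as a policy observation."""
    assert len(readings) == 12 and all(len(r) == 3 for r in readings)
    return [component for taxel in readings for component in taxel]
```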
TacFinRay: Soft Tactile Fin-Ray Finger with Indirect Tactile Sensing for Robust Grasping
Nam, Saekwang, Deng, Bowen, Lee, Loong Yi, Rossiter, Jonathan M., Lepora, Nathan F.
Abstract--We present a tactile-sensorized Fin-Ray finger that enables simultaneous detection of contact location and indentation depth through an indirect sensing approach. A hinge mechanism is integrated between the soft Fin-Ray structure and a rigid sensing module, allowing deformation and translation information to be transferred to a bottom crossbeam upon which is an array of marker-tipped pins based on the biomimetic structure of the TacTip vision-based tactile sensor. Deformation patterns captured by an internal camera are processed using a convolutional neural network to infer contact conditions without directly sensing the finger surface. The finger design was optimized by varying pin configurations and hinge orientations, achieving 0.1 mm depth and 2 mm location-sensing accuracies. The perception demonstrated robust generalization to various indenter shapes and sizes, and was applied to a pick-and-place task under uncertain picking positions, where the tactile feedback significantly improved placement accuracy. Overall, this work provides a lightweight, flexible, and scalable tactile sensing solution suitable for soft robotic structures where the sensing must be situated away from the contact interface. I. INTRODUCTION Tactile sensing is essential for achieving dexterous manipulation in robotic hands [1], [2]. For example, to perform delicate tasks like gently grasping and placing eggs or glass plates, humanoid robots such as Figure's F.02 and Tesla's Optimus will need fingertip-mounted tactile sensors to become truly capable [3]. To enhance robotic dexterity, researchers have developed vision-based tactile sensors (VBTSs) that take advantage of recent advancements in computer vision [4]-[7].
- Europe > United Kingdom > England > Bristol (0.04)
- Asia > South Korea > Daegu > Daegu (0.04)
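TacFinRay infers contact location and depth from marker-pin displacements with a CNN. As a rough illustration of the same input-output mapping, here is a purely geometric baseline (displacement-weighted centroid for location, mean displacement magnitude as a depth proxy); it stands in for, and is far cruder than, the learned model:

```python
import math

def infer_contact(marker_disp):
    """marker_disp: dict mapping pin rest position (x, y) -> displacement
    (dx, dy). Returns an estimated contact location (displacement-weighted
    centroid of pin positions) and a depth proxy (mean displacement
    magnitude). Returns (None, 0.0) when no pin has moved."""
    total = sum(math.hypot(dx, dy) for dx, dy in marker_disp.values())
    if total == 0.0:
        return None, 0.0
    cx = sum(x * math.hypot(*d) for (x, _), d in marker_disp.items()) / total
    cy = sum(y * math.hypot(*d) for (_, y), d in marker_disp.items()) / total
    depth = total / len(marker_disp)
    return (cx, cy), depth
```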
MagicSkin: Balancing Marker and Markerless Modes in Vision-Based Tactile Sensors with a Translucent Skin
Tijani, Oluwatimilehin, Chen, Zhuo, Deng, Jiankang, Luo, Shan
Vision-based tactile sensors (VBTS) face a fundamental trade-off in marker and markerless design on the tactile skin: opaque ink markers enable measurement of force and tangential displacement but completely occlude geometric features necessary for object and texture classification, while markerless skin preserves surface details but struggles to measure tangential displacements effectively. Current approaches to this problem, via UV lighting or virtual transfer using learning-based models, introduce hardware complexity or computing burdens. This paper introduces MagicSkin, a novel tactile skin with translucent, tinted markers that balances the marker and markerless modes for VBTS. It enables simultaneous tangential displacement tracking, force prediction, and surface detail preservation. The skin plugs into GelSight-family sensors without requiring additional hardware or software tools. We comprehensively evaluate MagicSkin on downstream tasks. The translucent markers enhance rather than degrade sensing performance compared with traditional markerless and inked-marker designs, achieving the best performance in object classification (99.17%), texture classification (93.51%), tangential displacement tracking (97% point retention), and force prediction (66% improvement in total force error). These experimental results demonstrate that translucent skin eliminates the traditional performance trade-off between marker and markerless modes, paving the way for the multimodal tactile sensing essential in tactile robotics. See videos at https://zhuochenn.github.io/MagicSkin_project/.
- Europe > United Kingdom (0.14)
- Europe > France > Île-de-France > Paris > Paris (0.04)
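Tangential displacement tracking and the reported "point retention" metric presuppose matching marker centroids across frames. A simple greedy nearest-neighbour sketch (the threshold and names are illustrative; MagicSkin's actual tracker is not described at this level of detail):

```python
import math

def track_markers(prev_pts, curr_pts, max_dist=5.0):
    """Greedily match each previous marker centroid to its nearest unused
    centroid in the current frame, within max_dist pixels. Returns the
    matches and the point-retention ratio (matched / previous markers)."""
    unused = list(curr_pts)
    matches = []
    for p in prev_pts:
        if not unused:
            break
        q = min(unused, key=lambda c: math.dist(p, c))
        if math.dist(p, q) <= max_dist:
            matches.append((p, q))
            unused.remove(q)
    retention = len(matches) / len(prev_pts) if prev_pts else 0.0
    return matches, retention
```

The per-marker displacement vectors (q - p for each match) are then what a tangential force estimate would be built on.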
Magnetic Tactile-Driven Soft Actuator for Intelligent Grasping and Firmness Evaluation
Du, Chengjin, Bernabei, Federico, Du, Zhengyin, Decherchi, Sergio, Preti, Matteo Lo, Beccai, Lucia
Soft robots are powerful tools for manipulating delicate objects, yet their adoption is hindered by two gaps: the lack of integrated tactile sensing and sensor signal distortion caused by actuator deformations. This paper addresses these challenges by introducing the SoftMag actuator: a magnetic tactile-sensorized soft actuator. Unlike systems relying on attached sensors or treating sensing and actuation separately, SoftMag unifies them through a shared architecture while confronting the mechanical parasitic effect, where deformations corrupt tactile signals. A multiphysics simulation framework models this coupling, and a neural-network-based decoupling strategy removes the parasitic component, restoring sensing fidelity. Experiments including indentation, quasi-static and step actuation, and fatigue tests validate the actuator's performance and decoupling effectiveness. Building upon this foundation, the system is extended into a two-finger SoftMag gripper, where a multi-task neural network enables real-time prediction of tri-axial contact forces and position. Furthermore, a probing-based strategy estimates object firmness during grasping. Validation on apricots shows a strong correlation (Pearson r over 0.8) between gripper-estimated firmness and reference measurements, confirming the system's capability for non-destructive quality assessment. Results demonstrate that combining integrated magnetic sensing, learning-based correction, and real-time inference enables a soft robotic platform that adapts its grasp and quantifies material properties. The framework offers an approach for advancing sensorized soft actuators toward intelligent, material-aware robotics.
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Asia > Singapore (0.04)
- North America > United States > California (0.04)
- (3 more...)
- Health & Medicine (0.93)
- Food & Agriculture > Agriculture (0.46)
- Consumer Products & Services > Food, Beverage, Tobacco & Cannabis (0.46)
- Materials > Chemicals > Commodity Chemicals (0.46)
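SoftMag's decoupling strategy amounts to subtracting a model-predicted parasitic (actuation-induced) component from the raw magnetic reading, leaving the contact signal. A minimal sketch, with a stand-in callable in place of the paper's neural network:

```python
def decouple(raw_signal, actuation_state, parasitic_model):
    """Remove the actuation-induced component from a raw tactile reading.
    parasitic_model maps the actuation state to the predicted parasitic
    contribution per channel (a neural network in the paper; any callable
    with that contract works here)."""
    predicted = parasitic_model(actuation_state)
    return [r - p for r, p in zip(raw_signal, predicted)]
```

In practice the parasitic model would be trained on no-contact actuation sweeps, so that whatever remains after subtraction is attributable to contact.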
High-Speed Event Vision-Based Tactile Roller Sensor for Large Surface Measurements
Khairi, Akram, Sajwani, Hussain, Alkilany, Abdallah Mohammad, AbuAssi, Laith, Halwani, Mohamad, Zaid, Islam Mohamed, Awadalla, Ahmed, Swart, Dewald, Ayyad, Abdulla, Zweiri, Yahya
Abstract-- Inspecting large-scale industrial surfaces like aircraft fuselages for quality control requires precise, high-resolution 3D geometry. Vision-based tactile sensors (VBTSs) offer high local resolution but require slow 'press-and-lift' measurements for large areas. Sliding or roller/belt VBTS designs provide continuous measurement but face significant challenges: sliding suffers from friction/wear, while both are speed-limited by camera frame rates and motion blur. Thus, a rapid, continuous, high-resolution method is needed. We introduce a novel neuromorphic tactile roller sensor. It uses a modified event-based multi-view stereo algorithm for 3D reconstruction, leveraging high temporal resolution and motion blur robustness. This reconstruction is most effective for surfaces with distinct edges or sharp features, which are often the most critical for defect detection in industrial inspection tasks. We demonstrate 0.5 m/s scanning speeds with MAE below 100 µm (11x faster than prior methods). A multi-reference Bayesian fusion strategy reduces MAE by 25.2%.

Surface metrology and surface inspection are crucial elements in quality assurance across diverse industries, particularly aerospace and automotive manufacturing. Precise inspection is required to identify characteristics like paint quality, coating integrity, and subtle defects such as cracks, nicks, and dents [1], [2], [3]. Often, achieving a resolution of 0.1 mm or lower is necessary to accurately classify these features and ensure component integrity and safety [4]. Traditional contact-based methods, including high-precision profilometers [5], [6] or microscopic techniques [7], [8], [9], offer high resolution locally but become exceedingly time-consuming when applied to large surface areas due to their sequential, point-by-point or small-patch measurement nature.
Non-contact optical methods, such as cameras, laser scanners, or structured light systems [2], [10], [11], [12], [13], [14], can significantly accelerate inspection by capturing data over wider areas. However, these methods often lack robustness; their performance can be compromised by variations in ambient lighting, motion blur when attempting high-speed scanning, or challenging surface optical properties like high reflectivity or transparency [15].
- North America > United States > Kansas > Sheridan County (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
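The multi-reference Bayesian fusion is not specified in detail here; a standard precision-weighted (inverse-variance) fusion of per-reference depth estimates would look like this sketch (the fusion rule is an assumption, not the paper's exact formulation):

```python
def fuse_measurements(means, variances):
    """Precision-weighted fusion of independent Gaussian depth estimates:
    each estimate is weighted by 1/variance, so more confident references
    dominate. Returns the fused mean and fused variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / wsum
    return mean, 1.0 / wsum
```

The fused variance is always smaller than the smallest input variance, which is consistent with the reported MAE reduction from using multiple references.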
Gentle Object Retraction in Dense Clutter Using Multimodal Force Sensing and Imitation Learning
Brouwer, Dane, Citron, Joshua, Nolte, Heather, Bohg, Jeannette, Cutkosky, Mark
Dense collections of movable objects are common in everyday spaces, from cabinets in a home to shelves in a warehouse. Safely retracting objects from such collections is difficult for robots, yet people do it frequently, leveraging learned experience in tandem with vision and non-prehensile tactile sensing on the sides and backs of their hands and arms. We investigate the role of contact force sensing for training robots to gently reach into constrained clutter and extract objects. The available sensing modalities are (1) "eye-in-hand" vision, (2) proprioception, (3) non-prehensile triaxial tactile sensing, (4) contact wrenches estimated from joint torques, and (5) a measure of object acquisition obtained by monitoring the vacuum line of a suction cup. We use imitation learning to train policies from a set of demonstrations on randomly generated scenes, then conduct an ablation study of wrench and tactile information. We evaluate each policy's performance across 40 unseen environment configurations. Policies employing any force sensing show fewer excessive force failures, an increased overall success rate, and faster completion times. The best performance is achieved using both tactile and wrench information, producing an 80% improvement above the baseline without force information.
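The ablation study varies which force modalities the policy receives. One common way to run such ablations with a fixed policy input dimension is to zero-fill disabled modalities; a sketch (the modality names follow the abstract's list, but the feature layout is assumed):

```python
# Modalities from the abstract; per-modality feature widths are hypothetical.
MODALITIES = ["vision", "proprio", "tactile", "wrench", "suction"]

def build_observation(obs, enabled):
    """Concatenate features of enabled modalities, zero-filling disabled
    ones so the policy input dimension stays fixed across ablations."""
    out = []
    for m in MODALITIES:
        feats = obs[m]
        out.extend(feats if m in enabled else [0.0] * len(feats))
    return out
```

Keeping the dimension fixed means the same network architecture can be trained for every ablation condition, isolating the effect of the missing signal.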